The purpose of this project is to develop a YOLOv5 license plate detection model that automatically localizes license plates in images with high accuracy and efficiency. License plate detection matters in many applications, including traffic management, parking management, toll collection, law enforcement, and surveillance. Manual detection of license plates is challenging because of the large number of vehicles on the road and the variability in license plate design and positioning; it also requires significant time and resources, which becomes a bottleneck in large-scale applications.
Data collection and preprocessing:
https://www.kaggle.com/datasets/andrewmvd/car-plate-detection
The 433 images with bounding-box annotations collected from Kaggle will be used for training and testing the model. We will preprocess the dataset by resizing the images, normalizing the pixel values, and extracting the license plate region.
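The three preprocessing steps can be sketched as follows. This is a minimal illustration with hypothetical helper names; YOLOv5's own dataloader performs resizing and normalization internally during training.

```python
import numpy as np

def normalize_pixels(img):
    # Scale uint8 pixel values into [0, 1] as float32.
    return img.astype(np.float32) / 255.0

def scale_bbox(box, orig_wh, new_wh):
    # Rescale an (xmin, ymin, xmax, ymax) box when the image is resized.
    sx = new_wh[0] / orig_wh[0]
    sy = new_wh[1] / orig_wh[1]
    xmin, ymin, xmax, ymax = box
    return (xmin * sx, ymin * sy, xmax * sx, ymax * sy)

def crop_plate(img, box):
    # Extract the license plate region given pixel corners.
    xmin, ymin, xmax, ymax = box
    return img[ymin:ymax, xmin:xmax]
```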
Jincheng Jiang, Zhuang Miao
Clone and install the YOLO from github repo
!git clone https://github.com/ultralytics/yolov5
%cd yolov5
!pip install -qr requirements.txt
Cloning into 'yolov5'... Receiving objects: 100% (15365/15365), 14.36 MiB | 28.39 MiB/s, done. Resolving deltas: 100% (10504/10504), done. /kaggle/working/yolov5 ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. (Several pre-installed Kaggle packages report version conflicts with the freshly installed requirements; none of the conflicting packages are used in this notebook.) WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
!sudo apt-get update
Fetched 14.1 MB in 2s (6161 kB/s) Reading package lists... Done
!sudo apt-get install python3-venv --yes
Reading package lists... Done The following NEW packages will be installed: python3-venv python3.8-venv 5 upgraded, 2 newly installed, 0 to remove and 73 not upgraded. ... Setting up python3.8-venv (3.8.10-0ubuntu1~20.04.7) ... Setting up python3-venv (3.8.2-0ubuntu2) ...
Build a virtual environment to run YOLOv5. Note that each `!` command in a notebook runs in its own subshell, so `!source yolov5-env/bin/activate` does not persist into later cells; the pip installs below therefore land in the base Kaggle environment.
!python3 -m venv yolov5-env
!source yolov5-env/bin/activate
!pip install torch==1.9.0+cu111 torchvision==0.10.0+cu111 -f https://download.pytorch.org/whl/cu111/torch_stable.html
!pip install wandb
!pip install pandas
!pip install matplotlib
Looking in links: https://download.pytorch.org/whl/cu111/torch_stable.html
Collecting torch==1.9.0+cu111 (2.0 GB) and torchvision==0.10.0+cu111 (23.2 MB) ...
Uninstalling torch-1.13.0 and torchvision-0.14.0 ...
ERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.
pytorch-lightning 1.9.4 requires torch>=1.10.0, but you have torch 1.9.0+cu111 which is incompatible.
kornia 0.6.10 requires torch>=1.9.1, but you have torch 1.9.0+cu111 which is incompatible.
Successfully installed torch-1.9.0+cu111 torchvision-0.10.0+cu111
Requirement already satisfied: wandb in /opt/conda/lib/python3.7/site-packages (0.14.0)
Requirement already satisfied: pandas in /opt/conda/lib/python3.7/site-packages (1.3.5)
Requirement already satisfied: matplotlib in /opt/conda/lib/python3.7/site-packages (3.5.3)
!pip install -qr requirements.txt
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
import yolov5
Import the necessary libraries and suppress warning messages that may occur during execution.
import pandas as pd
import numpy as np
import os
import glob
from datetime import datetime
import xml.etree.ElementTree as ET
import cv2
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings('ignore')
Paths to the plate annotations and the car images
a_path = "/kaggle/input/car-plate-detection/annotations"
i_path = "/kaggle/input/car-plate-detection/images"
Create lists to store the images and plate locations
X = []
y = []
Build a new dataset to store the annotation information
dataset = {
    "filename": [],
    "width": [],
    "height": [],
    "xmin": [],
    "ymin": [],
    "xmax": [],
    "ymax": [],
}
from lxml import etree
Extract information from the XML annotation files: image size (width and height) and bounding-box coordinates (xmin, ymin, xmax, ymax) for the detection task.
# extract the size of each image and its plate bounding boxes
for annotation in glob.glob(a_path + "/*.xml"):  # iterate over the xml files
    ETtree = etree.parse(annotation)
    # extract the image size from the <size> tag
    for dim in ETtree.xpath("size"):
        width = int(dim.xpath("width")[0].text)
        height = int(dim.xpath("height")[0].text)
    # extract one bounding box per <object>/<bndbox> tag
    for dim in ETtree.xpath("object/bndbox"):
        xmin = int(dim.xpath("xmin")[0].text)
        ymin = int(dim.xpath("ymin")[0].text)
        xmax = int(dim.xpath("xmax")[0].text)
        ymax = int(dim.xpath("ymax")[0].text)
        y.append([xmax, ymax, xmin, ymin])  # store the plate location
        # get the name of the xml file and find the related image
        # (some pictures have multiple plates, producing one row per box)
        data_name = [annotation.split('/')[-1][0:-4]]
        img_name = data_name[0] + ".png"
        img_path = os.path.join(i_path, img_name).replace("\\", "/")
        img1 = glob.glob(img_path)
        img = cv2.imread(img1[0])
        X.append(np.array(img))
        dataset['filename'].append(img_name)
        dataset['width'].append(width)
        dataset['height'].append(height)
        dataset['xmin'].append(xmin)
        dataset['ymin'].append(ymin)
        dataset['xmax'].append(xmax)
        dataset['ymax'].append(ymax)
data_plate=pd.DataFrame(dataset)
data_plate
| | filename | width | height | xmin | ymin | xmax | ymax |
|---|---|---|---|---|---|---|---|
| 0 | Cars339.png | 500 | 300 | 209 | 135 | 283 | 169 |
| 1 | Cars13.png | 400 | 268 | 191 | 147 | 242 | 169 |
| 2 | Cars74.png | 400 | 267 | 115 | 115 | 277 | 153 |
| 3 | Cars16.png | 400 | 221 | 36 | 175 | 62 | 186 |
| 4 | Cars291.png | 517 | 303 | 71 | 205 | 215 | 246 |
| ... | ... | ... | ... | ... | ... | ... | ... |
| 466 | Cars166.png | 300 | 400 | 148 | 123 | 201 | 150 |
| 467 | Cars60.png | 400 | 300 | 45 | 98 | 364 | 159 |
| 468 | Cars52.png | 400 | 300 | 226 | 181 | 327 | 210 |
| 469 | Cars297.png | 400 | 233 | 158 | 149 | 247 | 170 |
| 470 | Cars349.png | 400 | 267 | 38 | 225 | 146 | 249 |
471 rows × 7 columns
X[2]
array([[[ 29, 72, 67],
[ 53, 94, 92],
[ 63, 104, 102],
...,
[117, 104, 84],
[118, 105, 85],
[121, 107, 89]],
[[ 31, 74, 69],
[ 53, 95, 92],
[ 63, 103, 102],
...,
[120, 108, 87],
[120, 107, 87],
[123, 110, 91]],
[[ 44, 90, 85],
[ 56, 100, 96],
[ 62, 105, 101],
...,
[144, 132, 110],
[138, 125, 103],
[141, 128, 107]],
...,
[[228, 227, 225],
[214, 213, 211],
[206, 206, 204],
...,
[ 19, 18, 20],
[ 10, 9, 11],
[ 13, 12, 14]],
[[203, 203, 199],
[192, 191, 188],
[197, 196, 193],
...,
[ 27, 27, 28],
[ 19, 17, 19],
[ 14, 13, 14]],
[[180, 182, 177],
[178, 179, 175],
[189, 188, 184],
...,
[ 30, 30, 31],
[ 16, 14, 16],
[ 12, 11, 12]]], dtype=uint8)
y[2]
[277, 153, 115, 115]
Display the image with the bounding box
image = cv2.rectangle(X[2], (y[2][0], y[2][1]), (y[2][2], y[2][3]), (0, 0, 255))
plt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB))  # OpenCV loads BGR; matplotlib expects RGB
plt.show()
!mkdir "/kaggle/working/labels"
Process the image annotations into the YOLOv5 format: normalize the xmin, ymin, xmax, ymax values by dividing by the image width and height, then add the computed x_center, y_center, frame_width, and frame_height values as new columns of the data_plate dataframe.
x_center = []
y_center = []
frame_width = []
frame_height = []
save_type = 'w'
for i, row in data_plate.iterrows():
    # drop the ".png" extension from the filename
    current_filename = str(row.filename[:-4])
    # extract the numeric columns for this row
    width, height, xmin, ymin, xmax, ymax = list(data_plate.iloc[i][-6:])
    # normalize to the YOLOv5 format: box center and size as fractions of the image
    x = (xmin + xmax) / 2 / width
    y = (ymin + ymax) / 2 / height
    width = (xmax - xmin) / width
    height = (ymax - ymin) / height
    x_center.append(x)
    y_center.append(y)
    frame_width.append(width)
    frame_height.append(height)
    # build the YOLOv5 label string: "class x_center y_center width height"
    txt = f"0 {x} {y} {width} {height}\n"
    # append when consecutive rows share a filename (images with multiple
    # plates), otherwise start a new annotation file
    if i > 0:
        previous_filename = str(data_plate.filename[i - 1][:-4])
        save_type = 'a+' if current_filename == previous_filename else 'w'
    with open("/kaggle/working/labels/" + current_filename + '.txt', save_type) as f:
        f.write(txt)
data_plate['x_center'] = x_center
data_plate['y_center'] = y_center
data_plate['frame_width'] = frame_width
data_plate['frame_height'] = frame_height
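As a quick sanity check (an added sketch, not part of the original notebook), every normalized value should lie in [0, 1]:

```python
import pandas as pd

def labels_in_range(df, cols=('x_center', 'y_center', 'frame_width', 'frame_height')):
    # True when every YOLO-normalized value is a valid fraction of the image size.
    sub = df[list(cols)]
    return bool(((sub >= 0) & (sub <= 1)).all().all())
```

Applied to data_plate this should return True; a False would point at an annotation whose box extends past the image border.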
import os
files = os.listdir('/kaggle/working/labels')
print(files[0])
Cars145.txt
len(files)
433
data_plate
| | filename | width | height | xmin | ymin | xmax | ymax | x_center | y_center | frame_width | frame_height |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | Cars339.png | 500 | 300 | 209 | 135 | 283 | 169 | 0.492000 | 0.506667 | 0.148000 | 0.113333 |
| 1 | Cars13.png | 400 | 268 | 191 | 147 | 242 | 169 | 0.541250 | 0.589552 | 0.127500 | 0.082090 |
| 2 | Cars74.png | 400 | 267 | 115 | 115 | 277 | 153 | 0.490000 | 0.501873 | 0.405000 | 0.142322 |
| 3 | Cars16.png | 400 | 221 | 36 | 175 | 62 | 186 | 0.122500 | 0.816742 | 0.065000 | 0.049774 |
| 4 | Cars291.png | 517 | 303 | 71 | 205 | 215 | 246 | 0.276596 | 0.744224 | 0.278530 | 0.135314 |
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 466 | Cars166.png | 300 | 400 | 148 | 123 | 201 | 150 | 0.581667 | 0.341250 | 0.176667 | 0.067500 |
| 467 | Cars60.png | 400 | 300 | 45 | 98 | 364 | 159 | 0.511250 | 0.428333 | 0.797500 | 0.203333 |
| 468 | Cars52.png | 400 | 300 | 226 | 181 | 327 | 210 | 0.691250 | 0.651667 | 0.252500 | 0.096667 |
| 469 | Cars297.png | 400 | 233 | 158 | 149 | 247 | 170 | 0.506250 | 0.684549 | 0.222500 | 0.090129 |
| 470 | Cars349.png | 400 | 267 | 38 | 225 | 146 | 249 | 0.230000 | 0.887640 | 0.270000 | 0.089888 |
471 rows × 11 columns
duplicate_filenames = data_plate[data_plate['filename'].duplicated(keep=False)]['filename'].unique()
duplicate_rows = data_plate[data_plate['filename'].isin(duplicate_filenames)]
print(duplicate_rows)
filename width height xmin ymin xmax ymax x_center y_center \
7 Cars132.png 400 225 23 190 56 198 0.09875 0.862222
8 Cars132.png 400 225 378 188 400 200 0.97250 0.862222
48 Cars295.png 400 256 52 170 73 182 0.15625 0.687500
49 Cars295.png 400 256 237 143 271 162 0.63500 0.595703
81 Cars358.png 400 200 44 94 107 107 0.18875 0.502500
.. ... ... ... ... ... ... ... ... ...
406 Cars413.png 400 256 51 170 73 183 0.15500 0.689453
425 Cars103.png 400 196 230 129 248 134 0.59750 0.670918
426 Cars103.png 400 196 189 116 202 121 0.48875 0.604592
448 Cars85.png 400 267 86 218 146 235 0.29000 0.848315
449 Cars85.png 400 267 313 129 358 141 0.83875 0.505618
frame_width frame_height
7 0.0825 0.035556
8 0.0550 0.053333
48 0.0525 0.046875
49 0.0850 0.074219
81 0.1575 0.065000
.. ... ...
406 0.0550 0.050781
425 0.0450 0.025510
426 0.0325 0.025510
448 0.1500 0.063670
449 0.1125 0.044944
[62 rows x 11 columns]
# some images have multiple bounding boxes, each with its own row of information
filtered_data = data_plate[data_plate['filename'] == 'Cars295.png']
filtered_data
| filename | width | height | xmin | ymin | xmax | ymax | x_center | y_center | frame_width | frame_height | |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 48 | Cars295.png | 400 | 256 | 52 | 170 | 73 | 182 | 0.15625 | 0.687500 | 0.0525 | 0.046875 |
| 49 | Cars295.png | 400 | 256 | 237 | 143 | 271 | 162 | 0.63500 | 0.595703 | 0.0850 | 0.074219 |
Images with multiple bounding boxes store all of their YOLOv5 labels in a single txt file:
with open('/kaggle/working/labels/Cars295.txt', 'r') as file:
contents = file.read()
print(contents)
0 0.15625 0.6875 0.0525 0.046875
0 0.635 0.595703125 0.085 0.07421875
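To verify a label file, each line can be decoded back into pixel corners. The helper below is added for illustration (yolo_to_corners is not part of the notebook):

```python
def yolo_to_corners(line, img_w, img_h):
    # Convert one "class x_center y_center width height" label line
    # back to pixel corners (xmin, ymin, xmax, ymax).
    cls, xc, yc, w, h = line.split()
    xc, yc = float(xc) * img_w, float(yc) * img_h
    w, h = float(w) * img_w, float(h) * img_h
    return int(cls), (round(xc - w / 2), round(yc - h / 2),
                      round(xc + w / 2), round(yc + h / 2))
```

Decoding the first Cars295 line with its 400×256 image size recovers the original corners (52, 170, 73, 182) shown in the dataframe above.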
# create a list of img names
img_names = [*os.listdir("/kaggle/input/car-plate-detection/images")]
img_names[0:5]
['Cars393.png', 'Cars376.png', 'Cars87.png', 'Cars190.png', 'Cars177.png']
data_plate['filename'] = data_plate['filename'].str.replace('.png', '', regex=False)
data_plate
| filename | width | height | xmin | ymin | xmax | ymax | x_center | y_center | frame_width | frame_height | |
|---|---|---|---|---|---|---|---|---|---|---|---|
| 0 | Cars339 | 500 | 300 | 209 | 135 | 283 | 169 | 0.492000 | 0.506667 | 0.148000 | 0.113333 |
| 1 | Cars13 | 400 | 268 | 191 | 147 | 242 | 169 | 0.541250 | 0.589552 | 0.127500 | 0.082090 |
| 2 | Cars74 | 400 | 267 | 115 | 115 | 277 | 153 | 0.490000 | 0.501873 | 0.405000 | 0.142322 |
| 3 | Cars16 | 400 | 221 | 36 | 175 | 62 | 186 | 0.122500 | 0.816742 | 0.065000 | 0.049774 |
| 4 | Cars291 | 517 | 303 | 71 | 205 | 215 | 246 | 0.276596 | 0.744224 | 0.278530 | 0.135314 |
| ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... | ... |
| 466 | Cars166 | 300 | 400 | 148 | 123 | 201 | 150 | 0.581667 | 0.341250 | 0.176667 | 0.067500 |
| 467 | Cars60 | 400 | 300 | 45 | 98 | 364 | 159 | 0.511250 | 0.428333 | 0.797500 | 0.203333 |
| 468 | Cars52 | 400 | 300 | 226 | 181 | 327 | 210 | 0.691250 | 0.651667 | 0.252500 | 0.096667 |
| 469 | Cars297 | 400 | 233 | 158 | 149 | 247 | 170 | 0.506250 | 0.684549 | 0.222500 | 0.090129 |
| 470 | Cars349 | 400 | 267 | 38 | 225 | 146 | 249 | 0.230000 | 0.887640 | 0.270000 | 0.089888 |
471 rows × 11 columns
# split the images into training, validation and testing sets
from sklearn.model_selection import train_test_split
train, test = train_test_split(img_names, test_size=0.2, random_state=42)
test, val = train_test_split(test, test_size=0.7, random_state=42)
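With 433 images, the two splits above leave roughly 80% for training; because train_test_split rounds the held-out share up, the exact counts work out as follows (a small arithmetic check, assuming scikit-learn's documented ceil rounding):

```python
import math

n = 433
n_held_out = math.ceil(n * 0.2)      # images held out by the first split
n_train = n - n_held_out             # training images
n_val = math.ceil(n_held_out * 0.7)  # validation images (second split)
n_test = n_held_out - n_val          # test images
print(n_train, n_val, n_test)        # prints: 346 61 26
```

The 346 agrees with the train-folder count printed later in the notebook.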
# create train, validation and test dir
os.chdir('/kaggle/working/')
os.mkdir('./yolov5/data/train')
os.mkdir('./yolov5/data/val')
os.mkdir('./yolov5/data/test')
os.mkdir('./yolov5/data/train/images')
os.mkdir('./yolov5/data/train/labels')
os.mkdir('./yolov5/data/test/images')
os.mkdir('./yolov5/data/test/labels')
os.mkdir('./yolov5/data/val/images')
os.mkdir('./yolov5/data/val/labels')
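Equivalently, the directory tree can be built in one loop with os.makedirs, whose exist_ok flag also makes the cell safe to rerun (an alternative sketch, not the notebook's original code):

```python
import os

def make_split_dirs(root='./yolov5/data', splits=('train', 'val', 'test')):
    # Create <root>/<split>/images and <root>/<split>/labels in one pass.
    for split in splits:
        for sub in ('images', 'labels'):
            os.makedirs(os.path.join(root, split, sub), exist_ok=True)
```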
The function copyImages takes a list of image filenames (imageList) and a folder name (folder_Name). It uses the PIL library to open each image from the input folder and saves it to its new location in the YOLOv5 data directory.
# define the image copy function
from PIL import Image

def copyImages(imageList, folder_Name):
    for image in imageList:
        img = Image.open("../input/car-plate-detection/images/" + image)
        img.save("./yolov5/data/" + folder_Name + "/images/" + image)
copyImages(train, "train")
copyImages(val, "val")
copyImages(test, "test")
Check whether the images have been split into the target folders
path = "/kaggle/working/yolov5/data/train/images"
files = os.listdir(path)
print(files[0])
print(len(files))
Cars139.png
346
data_string = data_plate.astype('string')
The function writes the generated string to a .txt file with the same name as the image (but a .txt extension) in the directory "./yolov5/data/"+data_name+"/labels/". This file contains the YOLOv5-format labels for the corresponding image.
def create_labels(image_list, data_name):
    # create a list of image names without the extension
    fileNames = [x.split(".")[0] for x in image_list]
    for name in fileNames:
        # select the rows whose 'filename' column equals this name
        data = data_string[data_string.filename == name]
        box_list = []
        # add the object class and 4 box parameters to box_list
        for index in range(len(data)):
            row = data.iloc[index]
            box_list.append("0 " + row["x_center"] + " " + row["y_center"]
                            + " " + row["frame_width"] + " " + row["frame_height"])
        # join the bounding boxes into one label file per image, for YOLOv5 to read
        text = "\n".join(box_list)
        with open("./yolov5/data/" + data_name + "/labels/" + name + ".txt", "w") as file:
            file.write(text)
create_labels(train, "train")
create_labels(val, "val")
create_labels(test, "test")
Check if the label files have been created
folder_path = './yolov5/data/train/labels'
files = os.listdir(folder_path)
for file in files:
print(file)
break
Cars300.txt
with open('/kaggle/working/yolov5/data/train/labels/Cars295.txt', 'r') as file:
contents = file.read()
print(contents)
0 0.15625 0.6875 0.0525 0.046875
0 0.635 0.595703125 0.085 0.07421875
# go to YOLO directory
%cd yolov5
/kaggle/working/yolov5
Initialize the notebook display to show images produced by the YOLOv5 model.
from IPython.display import Image, clear_output
import torch
from yolov5 import utils
display = utils.notebook_init()
YOLOv5 🚀 v7.0-133-gcca5e21 Python-3.7.12 torch-1.9.0+cu111 CUDA:0 (Tesla P100-PCIE-16GB, 16281MiB)
Setup complete ✅ (2 CPUs, 15.6 GB RAM, 4557.6/8062.4 GB disk)
# define the train and val directories in the yaml text; nc is the number of classes and names lists the class names
yaml_text = """train: data/train/images
val: data/val/images
nc: 1
names: ['license']"""
# write yaml text to data.yaml
with open("data/data.yaml", 'w') as file:
file.write(yaml_text)
with open("data/data.yaml", 'r') as file:
print(file.read())
train: data/train/images
val: data/val/images
nc: 1
names: ['license']
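The written configuration can be round-tripped with PyYAML to confirm it parses and that nc matches the class list (a hedged check added here; PyYAML ships with the YOLOv5 requirements):

```python
import yaml

yaml_text = """train: data/train/images
val: data/val/images
nc: 1
names: ['license']"""

cfg = yaml.safe_load(yaml_text)
# YOLOv5 expects nc to agree with the length of names.
assert cfg["nc"] == len(cfg["names"])
```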
The %writetemplate magic takes a filename as the line argument and the cell body as the cell argument. It opens the file in write mode and writes the cell contents, using format() with **globals() to substitute placeholders with values from the global namespace.
# customize IPython's %%writefile to substitute variables
from IPython.core.magic import register_line_cell_magic

@register_line_cell_magic
def writetemplate(line, cell):
    with open(line, 'w') as f:
        f.write(cell.format(**globals()))
This model has a YOLOv5s backbone with a custom head, designed for one-class object detection tasks. The backbone consists of a series of convolutional layers and bottleneck blocks, which are used to extract features from the input image.
%%writetemplate models/custom_yolov5s.yaml

# parameters
nc: 1  # number of classes
depth_multiple: 0.33  # model depth multiple
width_multiple: 0.50  # layer channel multiple

# anchors
anchors:
  - [10,13, 16,30, 33,23]  # P3/8
  - [30,61, 62,45, 59,119]  # P4/16
  - [116,90, 156,198, 373,326]  # P5/32

# YOLOv5 backbone
backbone:
  # [from, number, module, args]
  [[-1, 1, Focus, [64, 3]],
   [-1, 1, Conv, [128, 3, 2]],
   [-1, 3, BottleneckCSP, [128]],
   [-1, 1, Conv, [256, 3, 2]],
   [-1, 9, BottleneckCSP, [256]],
   [-1, 1, Conv, [512, 3, 2]],
   [-1, 9, BottleneckCSP, [512]],
   [-1, 1, Conv, [1024, 3, 2]],
   [-1, 1, SPP, [1024, [5, 9, 13]]],
   [-1, 3, BottleneckCSP, [1024, False]],  # 9
  ]

# YOLOv5 head
head:
  [[-1, 1, Conv, [512, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 6], 1, Concat, [1]],  # cat backbone P4
   [-1, 3, BottleneckCSP, [512, False]],
   [-1, 1, Conv, [256, 1, 1]],
   [-1, 1, nn.Upsample, [None, 2, 'nearest']],
   [[-1, 4], 1, Concat, [1]],  # cat backbone P3
   [-1, 3, BottleneckCSP, [256, False]],  # 17 (P3/8-small)
   [-1, 1, Conv, [256, 3, 2]],
   [[-1, 14], 1, Concat, [1]],  # cat head P4
   [-1, 3, BottleneckCSP, [512, False]],  # 20 (P4/16-medium)
   [-1, 1, Conv, [512, 3, 2]],
   [[-1, 10], 1, Concat, [1]],  # cat head P5
   [-1, 3, BottleneckCSP, [1024, False]],  # 23 (P5/32-large)
   [[17, 20, 23], 1, Detect, [nc, anchors]],  # Detect(P3, P4, P5)
  ]
# train yolov5s on data
# record the time of training
# --img: size of image;
# --data: The path to the data configuration file
# --cfg: The path to the model configuration file
# --name: The name of the results folder
# --cache: Enable caching images for faster training
start = datetime.now()
!python train.py --img 640 --batch 32 --epochs 50 --data data/data.yaml --cfg models/custom_yolov5s.yaml --weights yolov5s.pt --name yolov5s_results --cache
end = datetime.now()
wandb: WARNING ⚠️ wandb is deprecated and will be removed in a future release. See supported integrations at https://github.com/ultralytics/yolov5#integrations.
wandb: W&B disabled due to login timeout.
train: weights=yolov5s.pt, cfg=models/custom_yolov5s.yaml, data=data/data.yaml, hyp=data/hyps/hyp.scratch-low.yaml, epochs=50, batch_size=32, imgsz=640, rect=False, resume=False, nosave=False, noval=False, noautoanchor=False, noplots=False, evolve=None, bucket=, cache=ram, image_weights=False, device=, multi_scale=False, single_cls=False, optimizer=SGD, sync_bn=False, workers=8, project=runs/train, name=yolov5s_results, exist_ok=False, quad=False, cos_lr=False, label_smoothing=0.0, patience=100, freeze=[0], save_period=-1, seed=0, local_rank=-1, entity=None, upload_dataset=False, bbox_interval=-1, artifact_alias=latest
github: up to date with https://github.com/ultralytics/yolov5 ✅
requirements: YOLOv5 requirements "tqdm>=4.64.0" "tensorboard>=2.4.1" not found, attempting AutoUpdate...
requirements: 2 packages updated per /kaggle/working/yolov5/requirements.txt (pip "Requirement already satisfied" output omitted)
requirements: ⚠️ Restart runtime or rerun command for updates to take effect
YOLOv5 🚀 v7.0-133-gcca5e21 Python-3.7.12 torch-1.9.0+cu111 CUDA:0 (Tesla P100-PCIE-16GB, 16281MiB)
hyperparameters: lr0=0.01, lrf=0.01, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=0.05, cls=0.5, cls_pw=1.0, obj=1.0, obj_pw=1.0, iou_t=0.2, anchor_t=4.0, fl_gamma=0.0, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.0, copy_paste=0.0
TensorBoard: Start with 'tensorboard --logdir runs/train', view at http://localhost:6006/
Downloading https://github.com/ultralytics/yolov5/releases/download/v7.0/yolov5s.pt to yolov5s.pt...
100%|██████████| 14.1M/14.1M [00:00<00:00, 77.5MB/s]

                 from  n    params  module                                  arguments
  0                -1  1      3520  models.common.Focus                     [3, 32, 3]
  1                -1  1     18560  models.common.Conv                      [32, 64, 3, 2]
  2                -1  1     19904  models.common.BottleneckCSP             [64, 64, 1]
  3                -1  1     73984  models.common.Conv                      [64, 128, 3, 2]
  4                -1  3    161152  models.common.BottleneckCSP             [128, 128, 3]
  5                -1  1    295424  models.common.Conv                      [128, 256, 3, 2]
  6                -1  3    641792  models.common.BottleneckCSP             [256, 256, 3]
  7                -1  1   1180672  models.common.Conv                      [256, 512, 3, 2]
  8                -1  1    656896  models.common.SPP                       [512, 512, [5, 9, 13]]
  9                -1  1   1248768  models.common.BottleneckCSP             [512, 512, 1, False]
 10                -1  1    131584  models.common.Conv                      [512, 256, 1, 1]
 11                -1  1         0  torch.nn.modules.upsampling.Upsample    [None, 2, 'nearest']
 12           [-1, 6]  1         0  models.common.Concat                    [1]
 13                -1  1    378624  models.common.BottleneckCSP             [512, 256, 1, False]
 14                -1  1     33024  models.common.Conv                      [256, 128, 1, 1]
 15                -1  1         0  torch.nn.modules.upsampling.Upsample    [None, 2, 'nearest']
 16           [-1, 4]  1         0  models.common.Concat                    [1]
 17                -1  1     95104  models.common.BottleneckCSP             [256, 128, 1, False]
 18                -1  1    147712  models.common.Conv                      [128, 128, 3, 2]
 19          [-1, 14]  1         0  models.common.Concat                    [1]
 20                -1  1    313088  models.common.BottleneckCSP             [256, 256, 1, False]
 21                -1  1    590336  models.common.Conv                      [256, 256, 3, 2]
 22          [-1, 10]  1         0  models.common.Concat                    [1]
 23                -1  1   1248768  models.common.BottleneckCSP             [512, 512, 1, False]
 24      [17, 20, 23]  1     16182  models.yolo.Detect                      [1, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [128, 256, 512]]
custom_YOLOv5s summary: 233 layers, 7255094 parameters, 7255094 gradients

Transferred 223/369 items from yolov5s.pt
AMP: checks passed ✅
optimizer: SGD(lr=0.01) with parameter groups 59 weight(decay=0.0), 70 weight(decay=0.0005), 62 bias
albumentations: Blur(p=0.01, blur_limit=(3, 7)), MedianBlur(p=0.01, blur_limit=(3, 7)), ToGray(p=0.01), CLAHE(p=0.01, clip_limit=(1, 4.0), tile_grid_size=(8, 8))
train: Scanning /kaggle/working/yolov5/data/train/labels... 346 images, 0 backgrounds
train: New cache created: /kaggle/working/yolov5/data/train/labels.cache
train: Caching images (0.3GB ram): 100%|██████████| 346/346
val: Scanning /kaggle/working/yolov5/data/val/labels... 61 images, 0 backgrounds
val: New cache created: /kaggle/working/yolov5/data/val/labels.cache
val: Caching images (0.0GB ram): 100%|██████████| 61/61
AutoAnchor: 4.11 anchors/target, 1.000 Best Possible Recall (BPR). Current anchors are a good fit to dataset ✅
Plotting labels to runs/train/yolov5s_results/labels.jpg...
Image sizes 640 train, 640 val
Using 2 dataloader workers
Logging results to runs/train/yolov5s_results
Starting training for 50 epochs...

(per-epoch log condensed; the first epochs also emitted "WARNING ⚠️ NMS time limit 3.550s exceeded" and a FutureWarning about a non-finite norm in torch.nn.utils.clip_grad_norm_)

      Epoch   GPU_mem  box_loss  obj_loss  cls_loss |         P         R     mAP50  mAP50-95
       0/49     8.83G    0.1106   0.02752         0 |  0.000123    0.0145  6.59e-05  2.64e-05
       9/49     8.83G   0.07636   0.02407         0 |   0.00251     0.667   0.00485   0.00128
      19/49     8.83G    0.0553   0.01922         0 |     0.847     0.449     0.567     0.208
      29/49     8.83G    0.0459   0.01556         0 |     0.778      0.56     0.674     0.285
      39/49     8.83G   0.04006   0.01272         0 |     0.787     0.695     0.772      0.39
      49/49     8.83G   0.03504    0.0121         0 |     0.823      0.71     0.812     0.434

50 epochs completed in 0.091 hours.
Optimizer stripped from runs/train/yolov5s_results/weights/last.pt, 14.9MB
Optimizer stripped from runs/train/yolov5s_results/weights/best.pt, 14.9MB

Validating runs/train/yolov5s_results/weights/best.pt...
Fusing layers...
custom_YOLOv5s summary: 182 layers, 7246518 parameters, 0 gradients
                Class    Images  Instances         P         R     mAP50  mAP50-95
                  all        61         69      0.82      0.71     0.812     0.432
Results saved to runs/train/yolov5s_results
print("Runtime =",end-start)
Runtime = 0:06:48.681023
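The mAP50 metric reported during training counts a prediction as a true positive when its intersection over union (IoU) with a ground-truth box is at least 0.5. A minimal IoU sketch for axis-aligned boxes in `(x1, y1, x2, y2)` form:

```python
def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) coordinates."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])   # intersection top-left
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])   # intersection bottom-right
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter)

print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 50 / 150 ≈ 0.333
```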
# list the files generated by the training run
file_list = os.listdir('runs/train/yolov5s_results')
file_list
['val_batch0_labels.jpg', 'confusion_matrix.png', 'results.csv', 'labels.jpg', 'opt.yaml', 'F1_curve.png', 'train_batch1.jpg', 'labels_correlogram.jpg', 'train_batch0.jpg', 'train_batch2.jpg', 'events.out.tfevents.1680435340.3596ba3427d1.1223.0', 'PR_curve.png', 'hyp.yaml', 'results.png', 'weights', 'val_batch0_pred.jpg', 'P_curve.png', 'R_curve.png']
plt.figure(figsize=(30,15))
plt.axis('off')
plt.imshow(plt.imread('/kaggle/working/yolov5/runs/train/yolov5s_results/results.png'))
<matplotlib.image.AxesImage at 0x7f7b45d60f90>
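results.png plots the columns of results.csv, and the CSV itself can be parsed to find, for example, the epoch with the best validation mAP50. A sketch using a synthetic two-row CSV; the column name `metrics/mAP_0.5` and the space-padded headers match what YOLOv5 writes, but treat both as assumptions to verify against your own results.csv:

```python
import csv, io

def best_epoch(csv_text, metric="metrics/mAP_0.5"):
    """Return (epoch, value) of the row maximizing the given metric column."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    # YOLOv5 pads header names and values with spaces, so normalize first
    rows = [{k.strip(): v.strip() for k, v in r.items()} for r in rows]
    best = max(rows, key=lambda r: float(r[metric]))
    return int(best["epoch"]), float(best[metric])

demo = "epoch, metrics/mAP_0.5\n0, 0.10\n1, 0.81\n2, 0.79\n"
print(best_epoch(demo))  # (1, 0.81)
```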
# visualize an augmented training batch with its labels
img = plt.imread('runs/train/yolov5s_results/train_batch2.jpg')
plt.figure(figsize=(20,15))
plt.imshow(img)
plt.axis('off')
plt.show()
# apply the trained YOLOv5 model to the test images
# --conf 0.4: discard detections with confidence below 0.4
!python detect.py --source data/test/images/ --weights runs/train/yolov5s_results/weights/best.pt --name expTestImage --conf 0.4
detect: weights=['runs/train/yolov5s_results/weights/best.pt'], source=data/test/images/, data=data/coco128.yaml, imgsz=[640, 640], conf_thres=0.4, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs/detect, name=expTestImage, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1
requirements: YOLOv5 requirement "tqdm>=4.64.0" not found, attempting AutoUpdate... (pip output omitted)
YOLOv5 🚀 v7.0-133-gcca5e21 Python-3.7.12 torch-1.9.0+cu111 CUDA:0 (Tesla P100-PCIE-16GB, 16281MiB)
Fusing layers...
custom_YOLOv5s summary: 182 layers, 7246518 parameters, 0 gradients
image  1/26 /kaggle/working/yolov5/data/test/images/Cars100.png: 448x640 1 license, 10.5ms
image  2/26 /kaggle/working/yolov5/data/test/images/Cars102.png: 512x640 (no detections), 12.2ms
image  3/26 /kaggle/working/yolov5/data/test/images/Cars105.png: 384x640 3 licenses, 10.6ms
image  4/26 /kaggle/working/yolov5/data/test/images/Cars110.png: 448x640 1 license, 7.7ms
image  5/26 /kaggle/working/yolov5/data/test/images/Cars145.png: 448x640 2 licenses, 7.3ms
image  6/26 /kaggle/working/yolov5/data/test/images/Cars151.png: 416x640 1 license, 11.0ms
image  7/26 /kaggle/working/yolov5/data/test/images/Cars153.png: 384x640 1 license, 7.9ms
image  8/26 /kaggle/working/yolov5/data/test/images/Cars171.png: 448x640 1 license, 7.9ms
image  9/26 /kaggle/working/yolov5/data/test/images/Cars210.png: 448x640 1 license, 7.4ms
image 10/26 /kaggle/working/yolov5/data/test/images/Cars230.png: 384x640 1 license, 9.7ms
image 11/26 /kaggle/working/yolov5/data/test/images/Cars243.png: 480x640 1 license, 11.7ms
image 12/26 /kaggle/working/yolov5/data/test/images/Cars246.png: 448x640 1 license, 7.9ms
image 13/26 /kaggle/working/yolov5/data/test/images/Cars26.png: 384x640 1 license, 8.1ms
image 14/26 /kaggle/working/yolov5/data/test/images/Cars263.png: 480x640 1 license, 7.5ms
image 15/26 /kaggle/working/yolov5/data/test/images/Cars265.png: 352x640 1 license, 11.4ms
image 16/26 /kaggle/working/yolov5/data/test/images/Cars277.png: 480x640 1 license, 7.7ms
image 17/26 /kaggle/working/yolov5/data/test/images/Cars278.png: 352x640 (no detections), 7.9ms
image 18/26 /kaggle/working/yolov5/data/test/images/Cars29.png: 480x640 1 license, 8.0ms
image 19/26 /kaggle/working/yolov5/data/test/images/Cars302.png: 320x640 2 licenses, 10.4ms
image 20/26 /kaggle/working/yolov5/data/test/images/Cars306.png: 640x640 1 license, 8.3ms
image 21/26 /kaggle/working/yolov5/data/test/images/Cars309.png: 384x640 1 license, 7.6ms
image 22/26 /kaggle/working/yolov5/data/test/images/Cars325.png: 384x640 1 license, 8.9ms
image 23/26 /kaggle/working/yolov5/data/test/images/Cars363.png: 448x640 1 license, 7.4ms
image 24/26 /kaggle/working/yolov5/data/test/images/Cars45.png: 416x640 1 license, 7.8ms
image 25/26 /kaggle/working/yolov5/data/test/images/Cars9.png: 512x640 1 license, 7.2ms
image 26/26 /kaggle/working/yolov5/data/test/images/Cars91.png: 416x640 1 license, 7.5ms
Speed: 0.4ms pre-process, 8.8ms inference, 0.8ms NMS per image at shape (1, 3, 640, 640)
Results saved to runs/detect/expTestImage
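The run above saved only annotated images; re-running detect.py with `--save-txt` would additionally write one label file per image (one `class x_center y_center width height` line per detection) under the run's `labels/` folder. A sketch of tallying detections per image from such files, using a temporary directory with hypothetical label contents standing in for the real output:

```python
from pathlib import Path
import tempfile

def count_detections(labels_dir):
    """Map image stem -> number of detections, from YOLO --save-txt label files."""
    counts = {}
    for f in Path(labels_dir).glob("*.txt"):
        counts[f.stem] = sum(1 for line in f.read_text().splitlines() if line.strip())
    return counts

# demo directory standing in for runs/detect/expTestImage/labels
with tempfile.TemporaryDirectory() as d:
    Path(d, "Cars100.txt").write_text("0 0.5 0.5 0.2 0.1\n")
    Path(d, "Cars105.txt").write_text("0 0.3 0.4 0.1 0.05\n"
                                      "0 0.7 0.6 0.1 0.05\n"
                                      "0 0.5 0.8 0.1 0.05\n")
    result = count_detections(d)
print(result)  # e.g. {'Cars100': 1, 'Cars105': 3}
```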
# show one of the annotated test images
dir_path = '/kaggle/working/yolov5/runs/detect/expTestImage'
files = os.listdir(dir_path)
files[1]
'Cars246.png'
Image("/kaggle/working/yolov5/runs/detect/expTestImage/Cars246.png")
# create a folder for new test images
folder_name = 'new_picture'
if not os.path.exists(folder_name):
    os.makedirs(folder_name)
# get some new images to test the YOLOV5 model
import shutil
shutil.copy('/kaggle/input/new-car-picture/new1.png', '/kaggle/working/yolov5/new_picture/new1.png')
shutil.copy('/kaggle/input/new-car-picture/new2.png', '/kaggle/working/yolov5/new_picture/new2.png')
'/kaggle/working/yolov5/new_picture/new2.png'
dir_path = '/kaggle/working/yolov5/new_picture'
files = os.listdir(dir_path)
files
['new1.png', 'new2.png']
!python detect.py --source "/kaggle/working/yolov5/new_picture" --weights '/kaggle/working/yolov5/runs/train/yolov5s_results/weights/best.pt'
detect: weights=['/kaggle/working/yolov5/runs/train/yolov5s_results/weights/best.pt'], source=/kaggle/working/yolov5/new_picture, data=data/coco128.yaml, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs/detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False, dnn=False, vid_stride=1
requirements: YOLOv5 requirement "tqdm>=4.64.0" not found, attempting AutoUpdate... (pip output omitted)
YOLOv5 🚀 v7.0-133-gcca5e21 Python-3.7.12 torch-1.9.0+cu111 CUDA:0 (Tesla P100-PCIE-16GB, 16281MiB)
Fusing layers...
custom_YOLOv5s summary: 182 layers, 7246518 parameters, 0 gradients
image 1/2 /kaggle/working/yolov5/new_picture/new1.png: 288x640 1 license, 11.2ms
image 2/2 /kaggle/working/yolov5/new_picture/new2.png: 416x640 2 licenses, 10.2ms
Speed: 0.3ms pre-process, 10.7ms inference, 1.2ms NMS per image at shape (1, 3, 640, 640)
Results saved to runs/detect/exp
# show the detection results on a new image
Image("/kaggle/working/yolov5/runs/detect/exp/new2.png")
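The project head lists extracting the license plate region as a preprocessing goal; given a YOLO-format detection, the crop is just a conversion from normalized `(x_center, y_center, width, height)` back to pixel indices. A minimal sketch on a synthetic NumPy array standing in for a photo (the box values are illustrative, not real detector output):

```python
import numpy as np

def crop_plate(img, box):
    """Crop a region given a YOLO box (x_center, y_center, w, h), all normalized."""
    h, w = img.shape[:2]
    xc, yc, bw, bh = box
    x1, x2 = int((xc - bw / 2) * w), int((xc + bw / 2) * w)
    y1, y2 = int((yc - bh / 2) * h), int((yc + bh / 2) * h)
    return img[y1:y2, x1:x2]

img = np.zeros((200, 400, 3), dtype=np.uint8)   # stand-in for a 400x200 photo
plate = crop_plate(img, (0.5, 0.5, 0.2, 0.1))   # centered box, 20% wide, 10% tall
print(plate.shape)  # (20, 80, 3)
```

The resulting crop could then be passed to an OCR stage, as the reference notebooks below do with pytesseract or EasyOCR.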
# Reference notebooks:
# https://www.kaggle.com/code/gowrishankarp/license-plate-detection-yolov5-pytesseract
# https://www.kaggle.com/code/saworz/realtime-plate-reading-video-yolov5-easyocr
# https://www.kaggle.com/code/rohitgadhwar/face-mask-detection-yolov5
The advantages of YOLOv5 include its speed, simplicity, and end-to-end training, which make it a good choice for real-time object detection tasks such as autonomous driving or video surveillance. However, its accuracy may trail that of more complex two-stage detectors on datasets with small or heavily occluded objects.
A typical CNN classifier, on the other hand, is better suited to image classification tasks, where the goal is to predict a single label for an entire image rather than to detect and localize objects within it. Since license plate detection requires localizing plates inside a larger scene, YOLOv5 is a more suitable choice for this task than a plain classification CNN.